Multi-index antithetic stochastic gradient algorithm

Authors

Abstract

Stochastic Gradient Algorithms (SGAs) are ubiquitous in computational statistics, machine learning and optimisation. Recent years have brought an influx of interest in SGAs, and the non-asymptotic analysis of their bias is by now well-developed. However, relatively little is known about the optimal choice of the random approximation (e.g. mini-batching) of the gradient in SGAs, as this relies on the analysis of the variance and is problem specific. While there have been numerous attempts to reduce the variance of SGAs, these typically exploit a particular structure of the sampled distribution by requiring a priori knowledge of its density's mode. In this paper, we construct a Multi-index Antithetic Stochastic Gradient Algorithm (MASGA) whose implementation is independent of the structure of the target measure. Our rigorous theoretical analysis demonstrates that for log-concave targets, MASGA achieves performance on par with Monte Carlo estimators that have access to unbiased samples from the distribution of interest. In other words, MASGA is an optimal estimator from the mean square error-computational cost perspective within the class of Monte Carlo estimators. To illustrate the robustness of our approach, we implement MASGA also in some simple non-log-concave numerical examples, however without providing theoretical guarantees on the algorithm's performance in such settings.
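The core ingredient behind MASGA is the combination of multilevel (multi-index) Monte Carlo with antithetic couplings of mini-batch estimators. The paper's construction spans several indices at once; the snippet below is only a minimal single-index sketch of the antithetic multilevel idea on a toy expectation E[g(batch mean)], not the authors' algorithm. The functional g, the Gaussian sampling distribution and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x):
    # Toy nonlinear functional standing in for a quantity of interest.
    return np.tanh(x)

def level_estimator(level, n_paths):
    """Antithetic multilevel difference at a given level.

    Fine term: g applied to the average of a batch of 2**level samples.
    Coarse antithetic term: average of g applied to each half-batch.
    Both terms reuse the *same* random draws; this coupling is what
    drives the variance of the difference to zero as the level grows.
    """
    batch = 2 ** level
    samples = rng.normal(0.5, 1.0, (n_paths, batch))  # illustrative law
    fine = g(samples.mean(axis=1))
    if level == 0:
        return fine
    half = batch // 2
    coarse = 0.5 * (g(samples[:, :half].mean(axis=1))
                    + g(samples[:, half:].mean(axis=1)))
    return fine - coarse

def antithetic_mlmc(max_level, n_paths):
    # Telescoping sum: E[F_L] = E[F_0] + sum_l E[F_l - F_{l-1}^anti],
    # since each half-batch has the same law as a level l-1 batch.
    return sum(level_estimator(l, n_paths).mean()
               for l in range(max_level + 1))

print(antithetic_mlmc(max_level=8, n_paths=20_000))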


Related articles

Improved Stochastic gradient descent algorithm for SVM

In order to improve the efficiency and classification ability of support vector machines (SVM) based on the stochastic gradient descent algorithm, three improved stochastic gradient descent (SGD) algorithms are used to solve the support vector machine: Momentum, Nesterov accelerated gradient (NAG), and RMSprop. The experimental results show that the algorithm based on RMSprop for solving the l...
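For concreteness, here is a minimal sketch of two of the three updates named above (Momentum and RMSprop) applied to a regularized hinge loss; the toy data, hyperparameters and helper names are illustrative assumptions, not the article's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linearly separable data; labels in {-1, +1}.
X = rng.standard_normal((500, 2))
y = np.sign(X[:, 0] + 0.5 * X[:, 1])

def hinge_subgrad(w, Xb, yb, lam=0.01):
    # Subgradient of (1/m) sum max(0, 1 - y w.x) + (lam/2)||w||^2.
    margins = yb * (Xb @ w)
    active = margins < 1.0
    g = lam * w
    if active.any():
        g -= (yb[active, None] * Xb[active]).sum(axis=0) / len(yb)
    return g

def momentum(w, v, g, lr=0.05, mu=0.9):
    # Heavy-ball update: accumulate a velocity, then step along it.
    v = mu * v - lr * g
    return w + v, v

def rmsprop(w, s, g, lr=0.01, rho=0.9, eps=1e-8):
    # Scale each coordinate by a running average of squared gradients.
    s = rho * s + (1 - rho) * g ** 2
    return w - lr * g / (np.sqrt(s) + eps), s

def train(update, steps=2000, batch=32):
    w, state = np.zeros(2), np.zeros(2)
    for _ in range(steps):
        idx = rng.integers(0, len(X), batch)
        w, state = update(w, state, hinge_subgrad(w, X[idx], y[idx]))
    return w

for name, upd in [("momentum", momentum), ("rmsprop", rmsprop)]:
    w = train(upd)
    print(f"{name}: accuracy {(np.sign(X @ w) == y).mean():.3f}")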


Optimal Quantization: Evolutionary Algorithm vs Stochastic Gradient

We propose a new method based on evolutionary optimization for obtaining an optimal L-quantizer of a multidimensional random variable. First, we briefly recall the main results about quantization. Then, we present the classical gradient-based approach (detailed in [2] and [7] for p=2) used up to now to find a "local" optimal L-quantizer. Then, we give an algorithm that per...
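The classical gradient-based approach mentioned above is, for p=2, a competitive-learning (CLVQ-type) update: draw a fresh sample, find its nearest centroid, and move that centroid a small, decreasing step towards the sample. A minimal sketch, with the target distribution, step schedule and sizes chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def clvq(n_centroids=8, dim=2, steps=50_000):
    """Stochastic gradient descent on the p=2 quantization error:
    each fresh sample pulls its nearest centroid slightly closer."""
    centroids = rng.standard_normal((n_centroids, dim))
    for t in range(1, steps + 1):
        x = rng.standard_normal(dim)            # sample from N(0, I)
        i = np.argmin(((centroids - x) ** 2).sum(axis=1))
        gamma = 1.0 / (100.0 + t)               # Robbins-Monro step size
        centroids[i] += gamma * (x - centroids[i])
    return centroids

print(clvq())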


DSA: Decentralized Double Stochastic Averaging Gradient Algorithm

This paper considers convex optimization problems where nodes of a network have access to summands of a global objective. Each of these local objectives is further assumed to be an average of a finite set of functions. The motivation for this setup is to solve large scale machine learning problems where elements of the training set are distributed to multiple computational elements. The decentr...
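DSA itself combines variance-reduced local gradients with a decentralized correction step, which the snippet below does not reproduce; it only sketches the underlying primitive, plain decentralized gradient descent with a doubly stochastic mixing matrix, on scalar quadratic local objectives. The network, objectives and step size are illustrative assumptions.

```python
import numpy as np

# 4-node ring; W is doubly stochastic (rows and columns sum to 1),
# so repeated mixing drives the nodes towards consensus.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

# Node i holds a local quadratic f_i(x) = 0.5 * (x - b_i)^2;
# the global minimizer of the sum is mean(b).
b = np.array([1.0, 2.0, 3.0, 4.0])
x = np.zeros(4)             # one scalar iterate per node
lr = 0.05

for _ in range(500):
    grads = x - b           # local gradients, computed in parallel
    x = W @ x - lr * grads  # mix with neighbours, then descend

# With a constant step size each node reaches a neighbourhood of 2.5.
print(x, "target:", b.mean())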


When Does Stochastic Gradient Algorithm Work Well?

In this paper, we consider a general stochastic optimization problem which is often at the core of supervised learning, such as deep learning and linear classification. We consider a standard stochastic gradient descent (SGD) method with a fixed, large step size and propose a novel assumption on the objective function, under which this method has improved convergence rates (to a neighborhoo...


Stochastic Recursive Gradient Algorithm for Nonconvex Optimization

In this paper, we study and analyze the mini-batch version of StochAstic Recursive grAdient algoritHm (SARAH), a method employing the stochastic recursive gradient, for solving empirical loss minimization for the case of nonconvex losses. We provide a sublinear convergence rate (to stationary points) for general nonconvex functions and a linear convergence rate for gradient dominated functions,...
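The recursive gradient at the heart of SARAH is v_t = g_B(w_t) - g_B(w_{t-1}) + v_{t-1}, refreshed by a full gradient at the start of each outer loop. A minimal mini-batch sketch on a toy least-squares problem follows (the article's analysis concerns nonconvex losses; a quadratic merely keeps the example short), with all sizes and step sizes illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy least-squares problem: f(w) = (1/2n) ||A w - y||^2.
n, d = 200, 5
A = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = A @ w_true

def grad(w, idx):
    # Gradient of the loss restricted to rows idx, averaged.
    return A[idx].T @ (A[idx] @ w - y[idx]) / len(idx)

def sarah(eta=0.05, epochs=20, inner=100, batch=10):
    w = np.zeros(d)
    for _ in range(epochs):
        w_prev = w.copy()
        v = grad(w, np.arange(n))     # v_0: full gradient refresh
        w = w - eta * v
        for _ in range(inner):
            idx = rng.integers(0, n, batch)
            # Recursive update: v_t = g_B(w_t) - g_B(w_{t-1}) + v_{t-1}.
            v = grad(w, idx) - grad(w_prev, idx) + v
            w_prev = w.copy()
            w = w - eta * v
    return w

print("distance to minimiser:", np.linalg.norm(sarah() - w_true))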



Journal

Journal title: Statistics and Computing

سال: 2023

ISSN: 0960-3174, 1573-1375

DOI: https://doi.org/10.1007/s11222-023-10220-8